combine json chunks from requests #4317
Conversation
Thanks for the fix @jiapingzeng! Wondering if you see any additional latency due to combining the chunks?
I didn't notice additional latency on my side. There are some concerns about collectList() running out of memory when waiting for an infinite upstream (https://stackoverflow.com/a/72042404), but I don't think that applies here, as agent execute requests are a few kB at most.
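As a rough illustration of the collect-then-combine pattern under discussion, using plain Reactor types; the byte[] chunk type, class, and helper below are assumptions for the sketch, not the actual ml-commons code:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ChunkCombineSketch {

    public static void main(String[] args) {
        // A streamed request body arrives as a finite sequence of small chunks;
        // agent execute requests are a few kB at most, so buffering them is cheap.
        Flux<byte[]> body = Flux.just(
            "{\"parameters\":{".getBytes(StandardCharsets.UTF_8),
            "\"question\":\"hi\"}}".getBytes(StandardCharsets.UTF_8)
        );

        // collectList() only completes when the upstream completes; an HTTP request
        // body is bounded, so this is a handful of buffers, not an infinite stream.
        Mono<byte[]> combined = body.collectList().map(ChunkCombineSketch::concat);

        System.out.println(new String(combined.block(), StandardCharsets.UTF_8));
    }

    private static byte[] concat(List<byte[]> chunks) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            buffer.write(chunk, 0, chunk.length);
        }
        return buffer.toByteArray();
    }
}
```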
Don't think the test failure is related to my change. Tried
Force-pushed from 84ea119 to 8292d74
Codecov Report ❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #4317 +/- ##
============================================
- Coverage 80.14% 80.14% -0.01%
- Complexity 10172 10174 +2
============================================
Files 854 854
Lines 44219 44227 +8
Branches 5113 5114 +1
============================================
+ Hits 35440 35445 +5
- Misses 6630 6633 +3
Partials 2149 2149
Looks like CI is passing on Windows but still failing on Linux due to unrelated errors. I will add UTs and run sanity tests locally and then we can merge this.
Signed-off-by: Jiaping Zeng <[email protected]>
Force-pushed from 0a7734d to 76d0400
Signed-off-by: Jiaping Zeng <[email protected]>
Force-pushed from 2a767b6 to 8f6c270
Added UTs for combineChunks. It's not possible to create a UT for prepareRequest since chunk collection happens asynchronously. However, I ran the below sanity tests:
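The sanity tests and UTs themselves aren't reproduced in this view; below is a rough sketch of the kind of combineChunks unit test meant here. The helper inside the test class is a hypothetical stand-in with an assumed signature, not the actual ml-commons method:

```java
import static org.junit.Assert.assertEquals;

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class CombineChunksTests {

    // Hypothetical stand-in for the combineChunks helper under test.
    private static byte[] combineChunks(List<byte[]> chunks) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            buffer.write(chunk, 0, chunk.length);
        }
        return buffer.toByteArray();
    }

    @Test
    public void testCombineChunks_splitJsonBody() {
        // A JSON body split across two chunks should come back as one valid document.
        List<byte[]> chunks = Arrays.asList(
            "{\"parameters\":{\"question\":".getBytes(StandardCharsets.UTF_8),
            "\"what is the weather\"}}".getBytes(StandardCharsets.UTF_8)
        );
        String combined = new String(combineChunks(chunks), StandardCharsets.UTF_8);
        assertEquals("{\"parameters\":{\"question\":\"what is the weather\"}}", combined);
    }

    @Test
    public void testCombineChunks_singleChunkUnchanged() {
        // A body that arrives in a single chunk should pass through unchanged.
        byte[] only = "{\"a\":1}".getBytes(StandardCharsets.UTF_8);
        assertEquals("{\"a\":1}", new String(combineChunks(Arrays.asList(only)), StandardCharsets.UTF_8));
    }
}
```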
    return BytesReference.fromByteBuffer(ByteBuffer.wrap(buffer.toByteArray()));
} catch (IOException e) {
    log.error("Failed to combine chunks", e);
    throw new OpenSearchStatusException("Failed to combine request chunks", RestStatus.INTERNAL_SERVER_ERROR, e);
We spoke about not using this OpenSearchStatusException as it will throw 500 errors. Have you tested on your end?
We can define the HTTP error code in OpenSearchStatusException.
I see that here we define it as a 500 error. @mingshl Do you have any concerns? Should we define it as some other error code?
I assume this is a server-side I/O failure while handling a request. If that's the case, then 500 should be the right status code.
I think 500 makes sense here, since failing to combine the chunks is an I/O failure.
However, when testing with invalid JSON after all the chunks are combined, the 400 doesn't seem to be getting through. Will enhance error handling on this part instead.
return Mono.error(new OpenSearchStatusException("Failed to parse request", RestStatus.BAD_REQUEST, e));
Error handling sample requests:
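The sample requests themselves aren't reproduced here; as a rough sketch of the error-mapping direction described above, a parse failure on the combined body can be surfaced as a 400 via Mono.error inside the reactive pipeline instead of escaping as an unhandled exception. The class and method names below are assumptions, and Gson stands in for whatever parser is actually used:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

import com.google.gson.JsonParser;
import com.google.gson.JsonSyntaxException;

import org.opensearch.OpenSearchStatusException;
import org.opensearch.core.rest.RestStatus;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ParseErrorMappingSketch {

    // Combine the streamed chunks, then validate; a malformed payload becomes a 400
    // through Mono.error instead of an unhandled exception (which reads as a 500).
    static Mono<String> prepareRequest(Flux<byte[]> body) {
        return body.collectList().flatMap(chunks -> {
            String json = new String(concat(chunks), StandardCharsets.UTF_8);
            try {
                JsonParser.parseString(json); // throws JsonSyntaxException on bad JSON
                return Mono.just(json);
            } catch (JsonSyntaxException e) {
                return Mono.error(new OpenSearchStatusException("Failed to parse request", RestStatus.BAD_REQUEST, e));
            }
        });
    }

    private static byte[] concat(List<byte[]> chunks) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            buffer.write(chunk, 0, chunk.length);
        }
        return buffer.toByteArray();
    }
}
```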
Signed-off-by: Jiaping Zeng <[email protected]>
Force-pushed from 291315f to 2bc0ae1
Description
Combine JSON chunks from streamed requests into a single payload before parsing, so that request bodies split across multiple chunks (e.g. agent execute requests) are parsed correctly.
Related Issues
#4314
Check List
- Commits are signed per the DCO using --signoff.
- By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

For more information on following Developer Certificate of Origin and signing off your commits, please check here.